motor control
Can a new book crack one of neuroscience's hardest problems? Not quite
The ideas presented in George Lakoff and Srini Narayanan's The Neural Mind are fascinating, but the writing is far less compelling. This is a book review in two parts. The first is about the ideas presented in The Neural Mind: How Brains Think, which are fascinating. The second is about the actual experience of reading it. The book tackles one of the biggest questions in neuroscience: how do neurons carry out all the different kinds of human thought, from planning motor actions to composing sentences and musing about philosophy? The two authors approach the question from very different perspectives.
Arnold: a generalist muscle transformer policy
Chiappa, Alberto Silvio, An, Boshi, Simos, Merkourios, Li, Chengkun, Mathis, Alexander
Controlling high-dimensional and nonlinear musculoskeletal models of the human body is a foundational scientific challenge. Recent machine learning breakthroughs have heralded policies that master individual skills like reaching, object manipulation and locomotion in musculoskeletal systems with many degrees of freedom. However, these agents are merely "specialists", achieving high performance for a single skill. In this work, we develop Arnold, a generalist policy that masters multiple tasks and embodiments. Arnold combines behavior cloning and fine-tuning with PPO to achieve expert or super-expert performance in 14 challenging control tasks from dexterous object manipulation to locomotion. A key innovation is Arnold's sensorimotor vocabulary, a compositional representation of the semantics of heterogeneous sensory modalities, objectives, and actuators. Arnold leverages this vocabulary via a transformer architecture to deal with the variable observation and action spaces of each task. This framework supports efficient multi-task, multi-embodiment learning and facilitates rapid adaptation to novel tasks. Finally, we analyze Arnold to provide insights into biological motor control, corroborating recent findings on the limited transferability of muscle synergies across tasks.
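The "sensorimotor vocabulary" idea can be illustrated with a small sketch: each sensory modality or actuator group gets its own learned projection into a shared token space, so tasks with different observation sets simply produce different numbers of tokens for the transformer to attend over. The modality names, dimensions, and random projections below are hypothetical stand-ins, not Arnold's actual vocabulary.

```python
import numpy as np

rng = np.random.default_rng(0)
D_MODEL = 16

# Hypothetical modality vocabulary: each channel type gets a learned
# projection into a shared D_MODEL-dimensional token space.
MODALITIES = {"joint_pos": 20, "muscle_len": 80, "goal": 3}
projections = {name: rng.normal(size=(dim, D_MODEL)) / np.sqrt(dim)
               for name, dim in MODALITIES.items()}
type_embed = {name: rng.normal(size=D_MODEL) for name in MODALITIES}

def tokenize(obs: dict) -> np.ndarray:
    """Map a heterogeneous observation dict to a (n_tokens, D_MODEL) array."""
    tokens = [obs[name] @ projections[name] + type_embed[name]
              for name in obs]  # one token per modality present
    return np.stack(tokens)

# A task lacking some modality simply yields fewer tokens; a downstream
# transformer attends over whatever set is present.
obs_a = {"joint_pos": rng.normal(size=20), "goal": rng.normal(size=3)}
obs_b = {name: rng.normal(size=dim) for name, dim in MODALITIES.items()}
print(tokenize(obs_a).shape)  # (2, 16)
print(tokenize(obs_b).shape)  # (3, 16)
```

This compositional tokenization is what lets one policy handle variable observation and action spaces without a fixed input layout per task.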
Projecting the New Body: How Body Image Evolves During Learning to Walk with a Wearable Robot
Advances in wearable robotics challenge the traditional definition of human motor systems, as wearable robots redefine body structure, movement capability, and wearers' perception of their own bodies. While these devices can empower the wearer's motor performance, there is limited understanding of how wearers update their perception of body image, especially during dynamic movements, while learning to use these devices. This study aimed to fill the gap by examining how body image changes as individuals learned to walk with a robotic prosthetic leg over multi-day training. We measured gait performance and perceived body images via the Selected Coefficient of Perceived Motion (SCoMo) after each training session. Based on human motor learning theory extended to wearer-robot systems, we hypothesized that the perceived body image when walking with a robotic leg co-evolves with actual gait improvement, becoming more certain and more accurate to the actual motion. Our results confirmed that motor learning improved both the physical and the perceived gait pattern towards normal, indicating that through practice the wearers incorporated the robotic leg into their sensorimotor systems to enable wearer-robot movement coordination. However, a persistent discrepancy between perceived and actual motion remained, likely due to the wearers' lack of direct sensation and control of the prosthesis. Additionally, perceptual overestimation in the later training sessions might limit further motor improvement. These findings suggest that enhancing the human sense of wearable robots and frequently calibrating the perceived body image are essential for effective training with lower-limb wearable robots and for developing more embodied assistive technologies.
Human sensory-musculoskeletal modeling and control of whole-body movements
Zuo, Chenhui, Lin, Guohao, Zhang, Chen, Zhuang, Shanning, Sui, Yanan
Coordinated human movement depends on the integration of multisensory inputs, sensorimotor transformation, and motor execution, as well as sensory feedback resulting from body-environment interaction. Building dynamic models of the sensory-musculoskeletal system is essential for understanding movement control and investigating human behaviours. Here, we report a human sensory-musculoskeletal model, termed SMS-Human, that integrates precise anatomical representations of bones, joints, and muscle-tendon units with multimodal sensory inputs involving visual, vestibular, proprioceptive, and tactile components. A stage-wise hierarchical deep reinforcement learning framework was developed to address the inherent challenges of high-dimensional control in musculoskeletal systems with integrated multisensory information. Using this framework, we demonstrated the simulation of three representative movement tasks, including bipedal locomotion, vision-guided object manipulation, and human-machine interaction during bicycling. Our results showed a close resemblance between natural and simulated human motor behaviours. The simulation also revealed musculoskeletal dynamics that could not be directly measured. This work sheds deeper insights into the sensorimotor dynamics of human movements, facilitates quantitative understanding of human behaviours in interactive contexts, and informs the design of systems with embodied intelligence.
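The stage-wise training idea can be sketched as a curriculum loop: training advances to the next stage only once performance on the current one clears a threshold. The stage names, thresholds, and toy "train step" below are hypothetical placeholders, not the SMS-Human framework's actual stages or learning rule.

```python
import random

random.seed(0)

# Hypothetical curriculum: each stage adds difficulty (or sensory channels),
# and training advances once an episode-reward threshold is met.
STAGES = [
    {"name": "balance",      "threshold": 0.6},
    {"name": "locomotion",   "threshold": 0.7},
    {"name": "manipulation", "threshold": 0.8},
]

def train_step(stage, policy_skill):
    """Stand-in for one RL update; returns a noisy episode reward."""
    policy_skill[stage["name"]] = policy_skill.get(stage["name"], 0.0) + 0.05
    return min(1.0, policy_skill[stage["name"]]) + random.uniform(-0.02, 0.02)

policy_skill = {}
completed = []
for stage in STAGES:
    reward = 0.0
    while reward < stage["threshold"]:
        reward = train_step(stage, policy_skill)
    completed.append(stage["name"])

print(completed)  # stages finished in order
```

The point of the staging is practical: directly optimizing a full multisensory musculoskeletal controller is a very hard exploration problem, and gating later stages on earlier competence keeps each sub-problem tractable.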
Brain implant helps woman with paralysis speak with her own voice again
Researchers have developed a new method for intercepting neural signals from the brain of a person with paralysis and translating them into audible speech, all in near real time. The result is a brain-computer interface (BCI) system similar to an advanced version of Google Translate, but instead of converting one language to another, it deciphers neural data and transforms it into spoken sentences. Recent advancements in machine learning have enabled researchers to train AI voice synthesizers using recordings of the individual's own voice, making the generated speech more natural and personalized. Patients with paralysis have already used BCIs to improve physical motor control by operating computer mice and prosthetic limbs. This particular system addresses a more specific subset of patients who have also lost their capacity to speak.
Reinforcement learning-based motion imitation for physiologically plausible musculoskeletal motor control
Simos, Merkourios, Chiappa, Alberto Silvio, Mathis, Alexander
How do humans move? The quest to understand human motion has broad applications in numerous fields, ranging from computer animation and motion synthesis to neuroscience, human prosthetics and rehabilitation. Although advances in reinforcement learning (RL) have produced impressive results in capturing human motion using simplified humanoids, controlling physiologically accurate models of the body remains an open challenge. In this work, we present a model-free motion imitation framework (KINESIS) to advance the understanding of muscle-based motor control. Using a musculoskeletal model of the lower body with 80 muscle actuators and 20 DoF, we demonstrate that KINESIS achieves strong imitation performance on 1.9 hours of motion capture data, is controllable by natural language through pre-trained text-to-motion generative models, and can be fine-tuned to carry out high-level tasks such as target goal reaching. Importantly, KINESIS generates muscle activity patterns that correlate well with human EMG activity. The physiological plausibility makes KINESIS a promising model for tackling challenging problems in human motor control theory, which we highlight by investigating Bernstein's redundancy problem in the context of locomotion. Code, videos and benchmarks will be available at https://github.com/amathislab/Kinesis.
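A motion-imitation objective of this kind is typically a tracking reward that decays exponentially with the distance between the policy's pose and the reference motion-capture pose. The sketch below follows that common DeepMimic-style pattern; the specific weights, scales, and reward terms are illustrative assumptions, not KINESIS's actual reward.

```python
import numpy as np

# Minimal imitation-reward sketch: exponentials of pose and velocity
# tracking error, blended with fixed weights (all values hypothetical).
def imitation_reward(q, q_ref, qd, qd_ref,
                     w_pose=0.65, w_vel=0.35, k_pose=2.0, k_vel=0.1):
    r_pose = np.exp(-k_pose * np.sum((q - q_ref) ** 2))
    r_vel = np.exp(-k_vel * np.sum((qd - qd_ref) ** 2))
    return w_pose * r_pose + w_vel * r_vel

q_ref = np.zeros(20)  # 20 DoF, matching the lower-body model in the paper
zero = np.zeros(20)
perfect = imitation_reward(q_ref, q_ref, zero, zero)   # exact tracking
off = imitation_reward(q_ref + 0.3, q_ref, zero, zero) # 0.3 rad off per joint
print(round(perfect, 3), round(off, 3))
```

The RL agent maximizes this reward while actuating the 80 muscles, so matching the kinematics is achieved indirectly through physiologically constrained muscle activations, which is why the resulting activity patterns can be compared against human EMG.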
Planning Human-Robot Co-manipulation with Human Motor Control Objectives and Multi-component Reaching Strategies
Haninger, Kevin, Peternel, Luka
For successful goal-directed human-robot interaction, the robot should adapt to the intentions and actions of the collaborating human. This can be supported by musculoskeletal or data-driven human models, where the former are limited to lower-level functioning such as ergonomics, and the latter have limited generalizability or data efficiency. What is missing is the inclusion of human motor control models that can provide generalizable human behavior estimates and integrate into robot planning methods. We use well-studied models from human motor control based on speed-accuracy and cost-benefit trade-offs to plan collaborative robot motions. In these models, the human trajectory minimizes an objective function, a formulation we adapt to numerical trajectory optimization. This can then be extended with constraints and new variables to realize collaborative motion planning and goal estimation. We deploy this model, as well as a multi-component movement strategy, in physical collaboration with uncertain goal-reaching and synchronized motion tasks, showing the ability of the approach to produce human-like trajectories over a range of conditions.
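A classic instance of the "trajectory minimizes an objective function" idea is the minimum-jerk model of reaching (Flash and Hogan), whose closed-form solution is a standard baseline in this literature. The sketch below evaluates that profile; whether the paper uses this exact objective is an assumption here, since the abstract only names the speed-accuracy and cost-benefit trade-off families.

```python
import numpy as np

# Minimum-jerk reaching profile: the closed-form minimizer of integrated
# squared jerk for a point-to-point movement of duration T.
def min_jerk(x0, xf, T, t):
    s = t / T
    return x0 + (xf - x0) * (10 * s**3 - 15 * s**4 + 6 * s**5)

t = np.linspace(0.0, 1.0, 101)
x = min_jerk(0.0, 0.3, 1.0, t)    # 30 cm reach over 1 s (illustrative)
v = np.gradient(x, t)             # numerical velocity

print(round(x[-1], 3))            # endpoint reaches the goal: 0.3
print(round(t[np.argmax(v)], 2))  # bell-shaped speed, peak at 0.5
```

For robot planning, the same objective is imposed numerically rather than in closed form, which is what allows extra constraints and variables (shared goals, synchronization) to be added to the optimization.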
Modeling the minutia of motor manipulation with AI
In neuroscience and biomedical engineering, accurately modeling the complex movements of the human hand has long been a significant challenge. Current models often struggle to capture the intricate interplay between the brain's motor commands and the physical actions of muscles and tendons. This gap not only hinders scientific progress but also limits the development of effective neuroprosthetics aimed at restoring hand function for those with limb loss or paralysis. EPFL professor Alexander Mathis and his team have developed an AI-driven approach that advances our understanding of these complex motor functions. The team used a creative machine learning strategy that combined curriculum-based reinforcement learning with detailed biomechanical simulations.